Selected Prompt Details
Once a prompt is selected from the Prompt Gallery, users are taken to a detailed view where they can interact with the prompt and customize its settings to suit their specific needs. This page offers a conversation-based interface where users can input their queries or data and view the AI's responses in real time.
Interaction Interface
The prompt interaction interface provides a structured environment where users can input data and receive feedback from the AI model. For instance, when selecting the Audio Diarization prompt, users can supply an audio recording, and the model will process it and return a speaker-labeled transcript in the conversation thread.
Conversation Flow
- User Input: Users can input their data or queries, such as code snippets, audio transcriptions, or text for analysis.
- Model Output: The model generates a response based on the input, such as transcribing the audio, providing feedback on code, or answering a question.
- System Instructions: This field lets users set the persona, tone, or formatting guidelines that the AI model follows across the conversation, further refining the output.
Each step in the conversation is clearly marked as User or Model, making it easy to follow the flow of the interaction.
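The alternating User/Model turns can be modeled as an ordered list of role-tagged messages plus an optional system instruction. The sketch below is illustrative only; the `Turn` and `Conversation` classes are not AI Studio's internal format:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    """One step in the conversation, tagged with who produced it."""
    role: str   # "user" or "model"
    text: str

@dataclass
class Conversation:
    """An ordered conversation thread plus optional system instructions."""
    system_instruction: str = ""
    turns: list = field(default_factory=list)

    def add_user(self, text):
        self.turns.append(Turn("user", text))

    def add_model(self, text):
        self.turns.append(Turn("model", text))

    def transcript(self):
        """Render the thread with each step clearly marked as User or Model."""
        lines = []
        for t in self.turns:
            label = "User" if t.role == "user" else "Model"
            lines.append(f"{label}: {t.text}")
        return "\n".join(lines)

# Example: a two-step exchange
chat = Conversation(system_instruction="Respond concisely.")
chat.add_user("Transcribe this audio clip.")
chat.add_model("Speaker 1: Hello, everyone...")
print(chat.transcript())
```

Keeping the system instruction separate from the turns mirrors the interface, where it is a dedicated field rather than a message in the thread.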
Customizing the Prompt Settings
The right-hand side of the page features the Run Settings panel, where users can adjust various parameters to customize how the AI model behaves:
- Model Selection: Users can choose the AI model from a dropdown list (e.g., Gemini 1.5 Flash), depending on the specific task.
- Token Count: This setting caps the number of tokens the model may generate in its output, controlling the maximum length of the response.
- Temperature: Controls the randomness and creativity of the output. A higher value will result in more varied responses, while a lower value will make the output more deterministic.
- JSON Mode: When enabled, the model's output is constrained to valid JSON, which is useful for structured data exchanges.
- Code Execution: This toggle enables or disables code execution within the prompt, allowing for more complex workflows when working with code-based prompts.
- Safety Settings: Lets users set thresholds for filtering potentially harmful content in the model's output, which is useful for controlling what the prompt is allowed to return.
By adjusting these settings, users can fine-tune how the prompt processes data and generates output, ensuring that it aligns with their project requirements.
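The Run Settings panel maps naturally onto a configuration object bundled with each request. The following sketch illustrates that mapping; the field names (e.g. `max_output_tokens`, `response_mime_type`, `harm_threshold`) are assumptions that approximate, but are not guaranteed to match, the exact API parameter names:

```python
import json

def build_run_settings(model="gemini-1.5-flash",
                       max_output_tokens=1024,
                       temperature=0.7,
                       json_mode=False,
                       code_execution=False,
                       safety_threshold="block_medium_and_above"):
    """Bundle the panel's settings into a single request payload."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature is expected to stay in a small range, e.g. 0-2")
    settings = {
        "model": model,
        "generation_config": {
            "max_output_tokens": max_output_tokens,  # caps response length
            "temperature": temperature,              # higher = more varied output
        },
        "tools": ["code_execution"] if code_execution else [],
        "safety_settings": {"harm_threshold": safety_threshold},
    }
    if json_mode:
        # Constrain the model to emit structured JSON output.
        settings["generation_config"]["response_mime_type"] = "application/json"
    return settings

# A deterministic, JSON-producing configuration
payload = build_run_settings(temperature=0.2, json_mode=True)
print(json.dumps(payload, indent=2))
```

Collecting the settings in one payload keeps a saved configuration reusable: the same object can be reapplied to future runs of the prompt.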
Saving and Running the Prompt
After configuring the necessary settings, users can click Run to execute the prompt. The system will process the input according to the chosen model and settings, returning the output in real time. If the configuration is satisfactory, users can save the settings and reuse them for future interactions.
Users can also save a copy of the conversation for further analysis or to use as a reference in other workflows.
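Saving a copy of the conversation can be as simple as serializing the turns and run settings to a JSON file for later analysis or reuse. This is a minimal sketch; the file layout shown is illustrative, not a format AI Studio exports:

```python
import json

def save_conversation(path, turns, run_settings):
    """Write the conversation thread and its run settings to a JSON file."""
    record = {"run_settings": run_settings, "turns": turns}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)

def load_conversation(path):
    """Reload a saved conversation for reference in another workflow."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Example round trip: save a short exchange, then restore it
turns = [
    {"role": "user", "text": "Summarize this transcript."},
    {"role": "model", "text": "The meeting covered three topics..."},
]
save_conversation("saved_prompt.json", turns, {"temperature": 0.4})
restored = load_conversation("saved_prompt.json")
print(restored["turns"][0]["role"])
```

Storing the run settings alongside the turns means a saved conversation carries everything needed to reproduce the interaction later.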